Solving Optimal Neural Layout by Gibbs Sampling
Abstract
Neural systems of organisms derive their functionality largely from the numerous and intricate connections between individual components. These connections are costly, and evolutionary pressure has refined them to maximize functionality while minimizing the associated cost. This tradeoff can be formulated as a constrained optimization problem. In this paper, we use simulated annealing, implemented through Gibbs sampling, to investigate the minimal-cost placement of individual components in neural systems. We show that, given the constraints and the presumed cost function associated with the neural interconnections, we can find the configuration of minimal cost. We restrict the mechanisms considered to those involving incremental improvement through local interactions, since real neural systems are likely subject to such constraints. By adjusting the cost function and comparing the result with the actual configuration of neural systems, we can infer the cost function that nature actually associates with these connections. This provides biologists with a powerful tool for investigating the configurations of neural systems.
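The abstract describes the approach only at a high level; the sketch below illustrates the general idea of annealed heat-bath (Gibbs) updates with local moves on a toy placement problem. It is not the authors' code: the one-dimensional slot layout, the random connection matrix, the quadratic wire-length cost, and the geometric cooling schedule are all illustrative assumptions.

```python
# A minimal sketch, assuming a 1-D slot layout and a quadratic wire-length cost.
# Simulated annealing via heat-bath (Gibbs) updates over local swap moves.
import numpy as np

rng = np.random.default_rng(0)

n = 12                                        # number of neural components
W = np.triu(rng.random((n, n)) < 0.25, 1)     # random connection mask (assumed)
W = (W | W.T).astype(float)                   # symmetric adjacency weights

def wiring_cost(order):
    """Total cost: sum of (wire length)^2 over all connected pairs."""
    pos = np.empty(n)
    pos[order] = np.arange(n)                 # slot index of each component
    d = pos[:, None] - pos[None, :]
    return 0.5 * np.sum(W * d**2)

order = rng.permutation(n)                    # initial layout: slot -> component
T = 5.0
for sweep in range(2000):
    for i in range(n - 1):
        # Local move: swap the components occupying adjacent slots i and i+1.
        alt = order.copy()
        alt[i], alt[i + 1] = alt[i + 1], alt[i]
        c0, c1 = wiring_cost(order), wiring_cost(alt)
        # Heat-bath (Gibbs) rule: pick between the two states with
        # probability proportional to exp(-cost / T).
        p_swap = 1.0 / (1.0 + np.exp((c1 - c0) / T))
        if rng.random() < p_swap:
            order = alt
    T *= 0.997                                # geometric annealing schedule (assumed)

print("final layout:", order, "cost:", wiring_cost(order))
```

Each swap decision is a single-variable Gibbs update (choosing between the current and swapped layouts from their conditional Boltzmann probabilities), so only local interactions are needed, in line with the constraint stated in the abstract.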
Similar articles
A Neural Network Method Based on Mittag-Leffler Function for Solving a Class of Fractional Optimal Control Problems
In this paper, a computational intelligence method is used for the solution of fractional optimal control problems (FOCPs) with equality and inequality constraints. According to the Pontryagin minimum principle (PMP) for FOCPs with fractional derivatives in the Riemann–Liouville sense, and by constructing a suitable error function, we define an unconstrained minimization problem. In the optimiz...
GD-GIBBS: a GPU-based sampling algorithm for solving distributed constraint optimization problems
Researchers have recently introduced a promising new class of Distributed Constraint Optimization Problem (DCOP) algorithms that is based on sampling. This paradigm is very amenable to parallelization since sampling algorithms require a lot of samples to ensure convergence, and the sampling process can be designed to be executed in parallel. This paper presents GPU-based D-Gibbs (GD-Gibbs), whi...
Application of MCMC Algorithm in Interest Rate Modeling
Interest rate modeling is a challenging but important problem in financial econometrics. This work is concerned with parameter estimation for short-term interest rate models. In light of a recent development in Markov chain Monte Carlo simulation techniques based on Gibbs sampling, numerical experiments are carried out to find an effective and convergent Bayesian estimation scheme. T...
A Position Paper on Statistical Inference Techniques Which Integrate Neural Network and Bayesian Network Models
Some statistical methods which have been shown to have direct neural network analogs are surveyed here; we discuss sampling, optimization, and representation methods which make them feasible when applied in conjunction with, or in place of, neural networks. We present the foremost of these, the Gibbs sampler, both in its successful role as a convergence heuristic derived from statistical physic...
Accelerated Gibbs sampling of normal distributions using matrix splittings and polynomials
Standard Gibbs sampling applied to a multivariate normal distribution with a specified precision matrix is equivalent in fundamental ways to the Gauss–Seidel iterative solution of linear equations in the precision matrix. Specifically, the iteration operators, the conditions under which convergence occurs, and geometric convergence factors (and rates) are identical. These results hold for arbit...
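To make the stated equivalence concrete, the following small sketch (my illustration, not code from the cited paper) checks that the mean part of a single-site Gibbs sweep targeting N(A⁻¹b, A⁻¹), where A is the precision matrix, coincides with one Gauss–Seidel sweep for Ax = b; the matrix and right-hand side are arbitrary test data.

```python
# A minimal sketch of the Gibbs / Gauss-Seidel correspondence, under assumed test data.
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # a random SPD precision matrix (assumed)
b = rng.standard_normal(n)

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep for the linear system A x = b."""
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

def gibbs_sweep(A, b, x, noise=True):
    """One sweep of single-site Gibbs sampling targeting N(A^{-1} b, A^{-1})."""
    x = x.copy()
    for i in range(len(b)):
        cond_mean = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        cond_std = 1.0 / np.sqrt(A[i, i])
        x[i] = cond_mean + (cond_std * rng.standard_normal() if noise else 0.0)
    return x

x0 = rng.standard_normal(n)
# With the noise term removed, the Gibbs sweep reduces exactly to Gauss-Seidel.
print(np.allclose(gibbs_sweep(A, b, x0, noise=False),
                  gauss_seidel_sweep(A, b, x0)))   # True
```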